W6. Linear Systems of Equations, Gaussian Elimination, Subspaces, Matrix Properties, Rank, Orthogonality, Invertibility

Author

Salman Ahmadi-Asl

Published

October 6, 2025


1. Summary

1.1 Linear Systems of Equations
1.1.1 Introduction

A linear system of equations is a collection of one or more linear equations involving the same set of variables. The goal is often to find the values of these variables that satisfy all equations simultaneously.

1.1.2 Matrix Representation

A system of linear equations can be represented compactly using matrices. Consider the system: \[ \begin{cases} 2x + 3y + 1z = 8 \\ 4x + 7y + 5z = 20 \\ -2y + 2z = 0 \end{cases} \] This can be written in the form \(Ax = b\), where:

  • \(A\) is the coefficient matrix, containing the coefficients of the variables. \[ A = \begin{bmatrix} 2 & 3 & 1 \\ 4 & 7 & 5 \\ 0 & -2 & 2 \end{bmatrix} \]
  • \(x\) is the vector of unknowns. \[ x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \]
  • \(b\) is the vector of constant terms. \[ b = \begin{bmatrix} 8 \\ 20 \\ 0 \end{bmatrix} \]

The augmented matrix is formed by combining the coefficient matrix \(A\) and the constant vector \(b\). It represents the entire system in a single matrix. \[ [A|b] = \left[\begin{array}{ccc|c} 2 & 3 & 1 & 8 \\ 4 & 7 & 5 & 20 \\ 0 & -2 & 2 & 0 \end{array}\right] \]
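This matrix form is exactly what numerical libraries consume. A minimal sketch in Python with NumPy, using the values from the example above (the variable names are illustrative):

```python
import numpy as np

# Coefficient matrix and right-hand side from the example above.
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 7.0, 5.0],
              [0.0, -2.0, 2.0]])
b = np.array([8.0, 20.0, 0.0])

# The augmented matrix [A|b] is just A with b appended as an extra column.
Ab = np.column_stack([A, b])

x = np.linalg.solve(A, b)   # solves Ax = b
print(x)
```

`np.linalg.solve` performs an LU factorization internally, which is the library counterpart of the elimination methods described next.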

1.1.3 Solution Types

A linear system can have:

  • No solution: The system is inconsistent or incompatible.
  • Exactly one solution: The system is consistent and has a unique solution.
  • Infinitely many solutions: The system is consistent and indeterminate.
1.2 Methods for Solving Linear Systems
1.2.1 Gaussian Elimination

This method transforms the augmented matrix into an upper triangular form (row echelon form) using elementary row operations. Once in this form, the solution can be found by back substitution.

Elementary Row Operations:

  1. Swapping two rows.
  2. Multiplying a row by a non-zero scalar.
  3. Adding a multiple of one row to another row.
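The procedure can be sketched directly in Python. This minimal implementation (the function name is ours) assumes every pivot it meets is non-zero, so no row swaps are needed; pivoting is covered in Example 4.4:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination and back substitution.
    Assumes each pivot encountered is non-zero (no row swapping)."""
    Ab = np.column_stack([A.astype(float), b.astype(float)])
    n = len(b)
    # Forward elimination: subtract multiples of the pivot row (operation 3).
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = Ab[i, k] / Ab[k, k]
            Ab[i, k:] -= m * Ab[k, k:]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (Ab[i, -1] - Ab[i, i + 1:n] @ x[i + 1:]) / Ab[i, i]
    return x

A = np.array([[2.0, 3.0, 1.0], [4.0, 7.0, 5.0], [0.0, -2.0, 2.0]])
b = np.array([8.0, 20.0, 0.0])
print(gaussian_elimination(A, b))
```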
1.2.2 Cramer’s Rule

This method finds the solution using determinants. For a system \(Ax = b\), the value of each variable \(x_i\) is given by the ratio of two determinants: \[ x_i = \frac{\det(A_i)}{\det(A)} \] where \(A_i\) is the matrix formed by replacing the \(i\)-th column of \(A\) with the vector \(b\). A unique solution exists only if \(\det(A) \neq 0\).
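A direct transcription of the rule, as a sketch (the helper name `cramer` is ours):

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    its i-th column replaced by b."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.astype(float).copy()
        Ai[:, i] = b                     # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[3.0, 2.0], [2.0, -4.0]])
b = np.array([7.0, -2.0])
print(cramer(A, b))
```

This is the system solved again in Example 4.12; the rule is elegant but costs one determinant per unknown, so elimination is preferred for larger systems.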

1.2.3 Inverse Matrix Method

If the coefficient matrix \(A\) is square and invertible, the solution is given by: \[ x = A^{-1}b \] The inverse \(A^{-1}\) can be found using methods like the adjugate matrix or Gauss-Jordan elimination.
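In NumPy this is a one-liner, shown here on the same 2x2 system used in the later examples:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, -4.0]])
b = np.array([7.0, -2.0])

x = np.linalg.inv(A) @ b   # x = A^{-1} b
print(x)
```

In numerical practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, since it is cheaper and more accurate.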

1.3 Gaussian Elimination Process in Detail
1.3.1 Pivots and Echelon Forms
  • A pivot is the first non-zero entry in a row of a matrix in row echelon form. Equivalently, an entry is a pivot if it is non-zero and every entry to its left in its row and below it in its column is zero.
  • Row Echelon Form (REF): A matrix is in REF if:
    1. All non-zero rows are above any rows of all zeros.
    2. Each leading entry (pivot) of a row is in a column to the right of the leading entry of the row above it.
  • Reduced Row Echelon Form (RREF): A matrix is in RREF if it is in REF, and:
    1. Every pivot is 1.
    2. Each pivot is the only non-zero entry in its column.
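SymPy computes the exact RREF together with the pivot columns; a quick check on the coefficient matrix from Section 1.1.2:

```python
from sympy import Matrix

A = Matrix([[2, 3, 1],
            [4, 7, 5],
            [0, -2, 2]])
R, pivot_cols = A.rref()   # RREF and the indices of the pivot columns
print(R)                   # the identity here, since A has full rank
print(pivot_cols)          # (0, 1, 2): every column contains a pivot
```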
1.3.2 Free and Leading Variables

In the context of a linear system, after reducing the augmented matrix to RREF:

  • Leading variables are the variables corresponding to columns that contain a pivot.
  • Free variables (or parameters) are the variables corresponding to columns that do not contain a pivot. They can be assigned any value, which is why systems with free variables have infinitely many solutions.
1.3.3 Homogeneous Systems

A homogeneous system is of the form \(Ax = 0\). It always has the trivial solution (\(x=0\)). A non-trivial solution exists if and only if the system has at least one free variable. If a homogeneous system has more variables than equations, it is guaranteed to have infinitely many non-trivial solutions.
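This guarantee can be checked with SymPy's `nullspace`, here on the 3x4 coefficient matrix of Example 4.3 (more unknowns than equations):

```python
from sympy import Matrix

# A homogeneous 3x4 system: rank(A) <= 3 < 4 columns, so there is
# at least one free variable and hence a non-trivial solution.
A = Matrix([[1, -1, 2, -1],
            [2, 2, 0, 1],
            [3, 1, 2, -1]])
basis = A.nullspace()          # one special solution per free variable
print(len(basis))              # 1 free variable here (the rank is 3)
for v in basis:
    assert A * v == Matrix.zeros(3, 1)   # each basis vector solves Ax = 0
```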

1.4 The Four Fundamental Subspaces

For any \(m \times n\) matrix \(A\), there are four fundamental subspaces that describe its properties.

1.4.1 Column Space, C(A)
  • What it is: The column space is the set of all possible linear combinations of the column vectors of \(A\). It represents all possible outputs \(b\) for which the system \(Ax=b\) is solvable.
  • Location: It is a subspace of \(\mathbb{R}^m\).
  • Basis: The pivot columns of the original matrix \(A\) form a basis for \(C(A)\).
  • Dimension: The dimension of \(C(A)\) is the rank of \(A\), denoted as \(r\).
1.4.2 Nullspace, N(A)
  • What it is: The nullspace is the set of all solutions to the homogeneous equation \(Ax=0\).
  • Location: It is a subspace of \(\mathbb{R}^n\).
  • Basis: The basis is formed by the “special solutions” to \(Ax=0\), with one basis vector for each free variable.
  • Dimension: The dimension of \(N(A)\), called the nullity, is the number of free variables, which is \(n-r\).
1.4.3 Row Space, C(Aᵀ)
  • What it is: The row space is the set of all possible linear combinations of the row vectors of \(A\). It is equivalent to the column space of the transpose of \(A\), \(A^T\).
  • Location: It is a subspace of \(\mathbb{R}^n\).
  • Basis: The non-zero rows of the row echelon form (R or RREF) of \(A\) form a basis for the row space.
  • Dimension: The dimension of the row space is also equal to the rank, \(r\).
1.4.4 Left Nullspace, N(Aᵀ)
  • What it is: The left nullspace is the nullspace of the transpose of \(A\). It consists of all vectors \(y\) such that \(A^T y = 0\), or equivalently, \(y^T A = 0\).
  • Location: It is a subspace of \(\mathbb{R}^m\).
  • Dimension: The dimension of the left nullspace is \(m-r\).
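The four dimensions can be verified with SymPy on the 3x4 matrix that reappears in Example 4.5:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 5],
            [2, 4, 8, 12],
            [3, 6, 7, 13]])
m, n = A.shape
r = A.rank()

print(r)                        # rank r = 2
print(len(A.columnspace()))     # dim C(A)    = r
print(len(A.rowspace()))        # dim C(A^T)  = r
print(len(A.nullspace()))       # dim N(A)    = n - r = 2
print(len(A.T.nullspace()))     # dim N(A^T)  = m - r = 1
```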
1.5 Rank of a Matrix
  • The rank of a matrix \(A\) is a fundamental measure of its “information content.” It is defined in several equivalent ways:
    1. The dimension of the column space.
    2. The dimension of the row space.
    3. The number of pivots in its row echelon form.
    4. The number of linearly independent columns (or rows).
  • For an \(m \times n\) matrix, the rank is always less than or equal to the smaller of the two dimensions: \(\text{rank}(A) \le \min(m, n)\).
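NumPy computes the rank numerically (via the singular values); here on the matrix analyzed by hand in Example 4.6:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
r = np.linalg.matrix_rank(A)
print(r)                        # 2: the columns satisfy c1 - 2*c2 + c3 = 0
assert r <= min(A.shape)        # rank never exceeds min(m, n)
```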
1.6 Orthogonal Matrices
  • A square matrix \(Q\) is orthogonal if its columns (and rows) form an orthonormal basis. This means the dot product of any column with itself is 1, and the dot product with any other column is 0.
  • The defining property is that its transpose is equal to its inverse: \(Q^T Q = Q Q^T = I\), or \(Q^{-1} = Q^T\).
  • Properties:
    • They preserve lengths: \(||Qx|| = ||x||\).
    • They preserve angles and dot products: \((Qx) \cdot (Qy) = x \cdot y\).
    • Their determinant is always \(\pm 1\).
    • Examples include rotation and reflection matrices.
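These properties are easy to spot-check numerically for a rotation matrix (the angle is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

assert np.allclose(Q.T @ Q, np.eye(2))            # Q^T Q = I, so Q^{-1} = Q^T
x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))   # lengths preserved
assert np.isclose(abs(np.linalg.det(Q)), 1.0)     # det(Q) = +-1
```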
1.7 Matrix Invertibility

A square \(n \times n\) matrix \(A\) is invertible (or non-singular) if there exists a matrix \(A^{-1}\) such that \(A A^{-1} = A^{-1} A = I\).

1.7.1 Conditions for Invertibility

The following conditions are equivalent for an \(n \times n\) matrix \(A\):

  1. \(A\) is invertible.
  2. The determinant of \(A\) is non-zero (\(\det(A) \neq 0\)).
  3. The rank of \(A\) is \(n\) (it has full rank).
  4. The columns of \(A\) are linearly independent.
  5. The rows of \(A\) are linearly independent.
  6. The nullspace of \(A\) contains only the trivial solution, \(x=0\).
  7. The equation \(Ax=b\) has a unique solution for every vector \(b\).
  8. The reduced row echelon form of \(A\) is the identity matrix \(I\).

Rectangular matrices are never invertible in this two-sided sense; at best they admit one-sided or pseudo-inverses.
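Several of the equivalent conditions can be verified together on a small example (a sketch, not an exhaustive check):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # det = -2, so A is invertible
n = A.shape[0]

assert not np.isclose(np.linalg.det(A), 0.0)   # condition 2
assert np.linalg.matrix_rank(A) == n           # condition 3 (full rank)
Ainv = np.linalg.inv(A)                        # condition 1
assert np.allclose(A @ Ainv, np.eye(n))
assert np.allclose(Ainv @ A, np.eye(n))
```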

1.7.2 Rank and Singularity
  • A square matrix is non-singular (invertible) if its rank is equal to its number of columns (\(rank(A) = n\)).
  • A square matrix is singular (non-invertible) if its rank is less than its number of columns (\(rank(A) < n\)). This indicates linear dependence among its columns.
1.8 Important Inequalities
  • Rank-Nullity Theorem: For any \(m \times n\) matrix \(A\), \(\text{rank}(A) + \text{nullity}(A) = n\). (Number of pivot columns + number of free columns = total number of columns).
  • Other Inequalities:
    • \(\text{rank}(A + B) \le \text{rank}(A) + \text{rank}(B)\)
    • \(\text{rank}(AB) \le \min(\text{rank}(A), \text{rank}(B))\)
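A quick numerical spot-check of the two inequalities on random matrices (this illustrates them, it does not prove them; `A` is built as a product of thin factors so its rank is at most 2):

```python
import numpy as np

rng = np.random.default_rng(0)
rank = np.linalg.matrix_rank

A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))  # rank <= 2
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 3))

assert rank(A + B) <= rank(A) + rank(B)
assert rank(A @ C) <= min(rank(A), rank(C))
```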

2. Definitions

  • Coefficient Matrix: A matrix whose entries are the coefficients of the variables in a system of linear equations.
  • Augmented Matrix: A matrix representing a linear system, formed by appending the column of constant terms to the coefficient matrix.
  • Pivot: The first non-zero element in a row of a matrix in row echelon form.
  • Row Echelon Form (REF): A simplified form of a matrix where all non-zero rows are above zero rows, and pivots move to the right in successive rows.
  • Reduced Row Echelon Form (RREF): A stricter form of REF where each pivot is 1 and is the only non-zero entry in its column.
  • Column Space C(A): The vector space spanned by the columns of matrix A.
  • Row Space C(Aᵀ): The vector space spanned by the rows of matrix A.
  • Nullspace N(A): The set of all vectors \(x\) for which \(Ax = 0\).
  • Rank: The dimension of the column space (or row space) of a matrix, equal to the number of pivots.
  • Nullity: The dimension of the nullspace of a matrix.
  • Orthogonal Matrix: A square matrix \(Q\) for which its inverse is equal to its transpose (\(Q^{-1} = Q^T\)). Its columns are orthonormal vectors.
  • Invertible (or Non-singular) Matrix: A square matrix that has an inverse, a non-zero determinant, and full rank.
  • Singular Matrix: A square matrix that does not have an inverse, has a zero determinant, and is not full rank.

3. Formulas

  • System of Equations: \(Ax = b\)
  • Cramer’s Rule: \[ x_i = \frac{\det(A_i)}{\det(A)} \]
  • Inverse Matrix Solution: \(x = A^{-1}b\)
  • Inverse of a 2x2 Matrix: For \(A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\), \(A^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\)
  • Determinant of a 2x2 Matrix: \(\det(A) = ad-bc\)
  • Orthogonal Matrix Condition: \(Q^T Q = I\) or \(Q^{-1} = Q^T\)
  • Length Preservation by Orthogonal Matrix: \(||Qx|| = ||x||\)
  • Rank-Nullity Theorem: \(\text{rank}(A) + \text{nullity}(A) = n\)
  • Rank Sum Inequality: \(\text{rank}(A+B) \le \text{rank}(A) + \text{rank}(B)\)
  • Rank Product Inequality: \(\text{rank}(AB) \le \min(\text{rank}(A), \text{rank}(B))\)
  • Sylvester’s Rank Inequality: \(\text{rank}(A) + \text{rank}(B) - n \le \text{rank}(AB)\)
  • Linear Independence Condition: \(c_1\vec{v_1} + c_2\vec{v_2} + \dots + c_n\vec{v_n} = \vec{0}\) has only the trivial solution (\(c_1=c_2=\dots=c_n=0\)).

4. Examples

4.1. Solve a System of Linear Equations (Lab 5, Task 1)

Solve the system: \[ \begin{cases} 2x + 3y + 1z = 8 \\ 4x + 7y + 5z = 20 \\ -2y + 2z = 0 \end{cases} \]

Solution:
  1. Write the Augmented Matrix: Represent the system of equations as an augmented matrix.
    • \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 4 & 7 & 5 & | & 20 \\ 0 & -2 & 2 & | & 0 \end{bmatrix}\)
  2. Use Gaussian Elimination: Create zeros below the first pivot (the top-left ‘2’).
    • Perform the operation \(R_2 \to R_2 - 2R_1\):
      • \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 0 & 1 & 3 & | & 4 \\ 0 & -2 & 2 & | & 0 \end{bmatrix}\)
    • Create a zero below the second pivot (the ‘1’ in the second row). Perform the operation \(R_3 \to R_3 + 2R_2\):
      • \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 0 & 1 & 3 & | & 4 \\ 0 & 0 & 8 & | & 8 \end{bmatrix}\)
  3. Perform Back Substitution: Convert the row-echelon matrix back into equations.
    • From \(R_3\): \(8z = 8 \implies z = 1\).
    • From \(R_2\): \(y + 3z = 4 \implies y + 3(1) = 4 \implies y = 1\).
    • From \(R_1\): \(2x + 3y + z = 8 \implies 2x + 3(1) + 1 = 8 \implies 2x + 4 = 8 \implies 2x = 4 \implies x = 2\).
Answer: The solution is \(x=2, y=1, z=1\).
4.2. Nontrivial Solutions in Homogeneous Systems (Lab 5, Task 2)

Consider a homogeneous linear system \(A\mathbf{x} = \mathbf{0}\) of \(n\) equations for \(n+1\) unknowns. Does it have a non-trivial solution? (\(\mathbf{x} \neq \mathbf{0}\))

Solution:
  1. Analyze the Matrix A: The system has \(n\) equations and \(n+1\) unknowns, so the coefficient matrix A has dimensions \(n \times (n+1)\).
  2. Consider the Rank: The rank of a matrix is the number of pivots, and it cannot be greater than the number of rows or the number of columns. In this case, \(\text{rank}(A) \le n\).
  3. Relate Rank to Free Variables: The number of free variables in a system is equal to the number of columns (unknowns) minus the rank.
    • Number of free variables = \((n+1) - \text{rank}(A)\).
  4. Determine the Number of Free Variables: Since we know \(\text{rank}(A) \le n\), the number of free variables must be at least \((n+1) - n = 1\).
  5. Conclusion: A homogeneous system has non-trivial solutions if and only if there is at least one free variable. Since this system is guaranteed to have at least one free variable, it must have a non-trivial solution.
Answer: Yes, it must have a non-trivial solution because there are more unknowns than equations.
4.3. Show Existence of Nontrivial Solutions (Lab 5, Task 3)

Show that the following homogeneous system has nontrivial solutions: \[ \begin{cases} 1x_1 - 1x_2 + 2x_3 - 1x_4 = 0 \\ 2x_1 + 2x_2 + 0x_3 + 1x_4 = 0 \\ 3x_1 + 1x_2 + 2x_3 - 1x_4 = 0 \end{cases} \]

Solution:
  1. Analyze the System: This is a homogeneous system with 3 equations and 4 unknowns.
  2. Apply the Rank Theorem: The number of variables (4) is greater than the number of equations (3). This means the rank of the coefficient matrix can be at most 3.
  3. Guarantee of Free Variables: The number of free variables is given by (number of variables) - (rank). Since the rank is at most 3, the number of free variables is at least \(4 - 3 = 1\).
  4. Conclusion: Because the system is homogeneous and is guaranteed to have at least one free variable, there must be an infinite number of solutions, which means there are nontrivial solutions.
Answer: The system has more unknowns (4) than equations (3), which guarantees the existence of non-trivial solutions for a homogeneous system.
4.4. Gaussian Elimination with Pivoting (Lab 5, Task 4)

Solve the system of equations using Gaussian elimination process with pivoting by maximum element (absolute value): \[ \begin{cases} 2x_1 + 1x_2 + 3x_3 + 2x_4 = 0 \\ 2x_1 + 1x_2 + 5x_3 + 1x_4 = 2 \\ 2x_1 + 1x_2 + 4x_3 + 2x_4 = 1 \\ 1x_1 + 3x_2 + 3x_3 + 2x_4 = 6 \end{cases} \]

Solution:
  1. Write the Augmented Matrix:
    • \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 2 & 1 & 5 & 1 & | & 2 \\ 2 & 1 & 4 & 2 & | & 1 \\ 1 & 3 & 3 & 2 & | & 6 \end{bmatrix}\)
  2. Step 1: Pivoting: The elements in the first column are {2, 2, 2, 1}. The maximum absolute value is 2. No row swap is needed.
  3. Step 1: Elimination:
    • \(R_2 \to R_2 - R_1\), \(R_3 \to R_3 - R_1\), \(R_4 \to R_4 - 0.5R_1\)
    • \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 1 & 0 & | & 1 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \end{bmatrix}\)
  4. Step 2: Pivoting: Look at the sub-matrix from row 2 down. The elements in the second column are {0, 0, 2.5}. The maximum absolute value is 2.5 in row 4. Swap R2 and R4.
    • \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 1 & 0 & | & 1 \\ 0 & 0 & 2 & -1 & | & 2 \end{bmatrix}\)
  5. Step 2: Elimination: No elimination is needed for the second column as the elements below the pivot are already zero.
  6. Step 3: Pivoting: Look at the sub-matrix from row 3 down. The elements in the third column are {1, 2}. The maximum absolute value is 2 in row 4. Swap R3 and R4.
    • \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 1 & 0 & | & 1 \end{bmatrix}\)
  7. Step 3: Elimination:
    • \(R_4 \to R_4 - 0.5R_3\)
    • \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 0 & 0.5 & | & 0 \end{bmatrix}\)
  8. Back Substitution:
    • From \(R_4\): \(0.5x_4 = 0 \implies x_4 = 0\).
    • From \(R_3\): \(2x_3 - x_4 = 2 \implies 2x_3 - 0 = 2 \implies x_3 = 1\).
    • From \(R_2\): \(2.5x_2 + 1.5x_3 + x_4 = 6 \implies 2.5x_2 + 1.5(1) + 0 = 6 \implies 2.5x_2 = 4.5 \implies x_2 = 1.8\).
    • From \(R_1\): \(2x_1 + x_2 + 3x_3 + 2x_4 = 0 \implies 2x_1 + 1.8 + 3(1) + 0 = 0 \implies 2x_1 + 4.8 = 0 \implies 2x_1 = -4.8 \implies x_1 = -2.4\).
Answer: The solution is \(x_1 = -2.4, x_2 = 1.8, x_3 = 1, x_4 = 0\).
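The hand computation can be confirmed by substituting the solution back into the original system:

```python
import numpy as np

A = np.array([[2.0, 1.0, 3.0, 2.0],
              [2.0, 1.0, 5.0, 1.0],
              [2.0, 1.0, 4.0, 2.0],
              [1.0, 3.0, 3.0, 2.0]])
b = np.array([0.0, 2.0, 1.0, 6.0])
x = np.array([-2.4, 1.8, 1.0, 0.0])   # solution found above

assert np.allclose(A @ x, b)          # every equation is satisfied
```

(Partial pivoting by maximum absolute value is also what library LU routines do, to avoid dividing by small pivots.)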
4.5. Analysis of a Linear System (Lab 5, Task 5)

The system of equations is given: \[ \begin{cases} 1x_1 + 2x_2 + 3x_3 + 5x_4 = b_1 \\ 2x_1 + 4x_2 + 8x_3 + 12x_4 = b_2 \\ 3x_1 + 6x_2 + 7x_3 + 13x_4 = b_3 \end{cases} \]

  1. Reduce \([A|\mathbf{b}]\) to \([U|\mathbf{c}]\).
  2. Find the conditions on \((b_1, b_2, b_3)\) to have a solution.
  3. Describe the column space of A.
  4. Describe the nullspace of A.
  5. Find a particular solution to \(A\mathbf{x} = (0, 6, -6)\) and the complete solution \(\mathbf{x}_p + \mathbf{x}_n\).
Solution:
  1. Reduce to Echelon Form \([U|\mathbf{c}]\):
    • \(\begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 2 & 4 & 8 & 12 & | & b_2 \\ 3 & 6 & 7 & 13 & | & b_3 \end{bmatrix} \xrightarrow[R_3 \to R_3-3R_1]{R_2 \to R_2-2R_1} \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & -2 & -2 & | & b_3-3b_1 \end{bmatrix} \xrightarrow{R_3 \to R_3+R_2} \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & 0 & 0 & | & b_3-3b_1+b_2-2b_1 \end{bmatrix}\)
    • \(U|\mathbf{c} = \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & 0 & 0 & | & b_2+b_3-5b_1 \end{bmatrix}\)
  2. Conditions on \(\mathbf{b}\) for a Solution:
    • For the system to be consistent, the last row must not be of the form \([0 \ 0 \ 0 \ 0 \ | \ \text{non-zero}]\).
    • Therefore, the condition is \(b_2 + b_3 - 5b_1 = 0\).
  3. Column Space of A:
    • The column space is the set of all vectors \(\mathbf{b}\) for which a solution exists.
    • This is the plane in \(\mathbb{R}^3\) defined by the equation \(-5b_1 + b_2 + b_3 = 0\).
  4. Nullspace of A:
    • Solve \(A\mathbf{x} = \mathbf{0}\) using the echelon form. The pivots are in columns 1 and 3 (\(x_1, x_3\)). The free variables are \(x_2, x_4\).
    • \(2x_3 + 2x_4 = 0 \implies x_3 = -x_4\).
    • \(x_1 + 2x_2 + 3x_3 + 5x_4 = 0 \implies x_1 + 2x_2 + 3(-x_4) + 5x_4 = 0 \implies x_1 + 2x_2 + 2x_4 = 0 \implies x_1 = -2x_2 - 2x_4\).
    • The nullspace solution is \(\mathbf{x}_n = \begin{bmatrix} -2x_2 - 2x_4 \\ x_2 \\ -x_4 \\ x_4 \end{bmatrix} = x_2\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\). The nullspace is the span of these two vectors.
  5. Particular and Complete Solution:
    • We are given \(\mathbf{b} = (0, 6, -6)\). Check the condition: \(6 + (-6) - 5(0) = 0\). It holds.
    • Use the reduced system with this \(\mathbf{b}\):
      • \(2x_3 + 2x_4 = b_2-2b_1 = 6-0=6 \implies x_3 + x_4 = 3\).
      • \(x_1 + 2x_2 + 3x_3 + 5x_4 = b_1 = 0\).
    • To find a particular solution \(\mathbf{x}_p\), set free variables to zero: \(x_2=0, x_4=0\).
      • \(x_3 + 0 = 3 \implies x_3 = 3\).
      • \(x_1 + 0 + 3(3) + 0 = 0 \implies x_1 = -9\).
    • So, \(\mathbf{x}_p = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix}\).
    • The complete solution is \(\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} + c_1\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).

Answer:

  1. See the echelon form above.
  2. The condition is \(b_2 + b_3 - 5b_1 = 0\).
  3. The column space is the plane \(-5b_1 + b_2 + b_3 = 0\).
  4. The nullspace is the span of \(\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).
  5. The complete solution is \(\mathbf{x} = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} + c_1\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).
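All three pieces of the answer can be verified numerically: the particular solution hits \(\mathbf{b}\), the nullspace vectors map to zero, and any combination \(\mathbf{x}_p + c_1 \mathbf{n}_1 + c_2 \mathbf{n}_2\) still solves the system:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 5.0],
              [2.0, 4.0, 8.0, 12.0],
              [3.0, 6.0, 7.0, 13.0]])
b = np.array([0.0, 6.0, -6.0])

xp = np.array([-9.0, 0.0, 3.0, 0.0])   # particular solution
n1 = np.array([-2.0, 1.0, 0.0, 0.0])   # nullspace basis vectors
n2 = np.array([-2.0, 0.0, -1.0, 1.0])

assert np.allclose(A @ xp, b)
assert np.allclose(A @ n1, 0) and np.allclose(A @ n2, 0)
assert np.allclose(A @ (xp + 2 * n1 - 3 * n2), b)   # complete solution family
```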
4.6. Determine the Rank of a Matrix (Lecture 5, Example 1)

Find the rank of the matrix \(A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\).

Solution:
  1. Definition of Rank: The rank of a matrix is the maximum number of linearly independent columns (or rows) in the matrix.
  2. Check for Linear Dependence: We can inspect the relationship between the columns. Let the columns be \(\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\).
    • Notice that \(\mathbf{c}_2 - \mathbf{c}_1 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} - \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\).
    • Also, \(\mathbf{c}_3 - \mathbf{c}_2 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix} - \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\).
    • Since \(\mathbf{c}_2 - \mathbf{c}_1 = \mathbf{c}_3 - \mathbf{c}_2\), we can rearrange this to get \(\mathbf{c}_1 - 2\mathbf{c}_2 + \mathbf{c}_3 = \mathbf{0}\). This shows that the columns are linearly dependent.
  3. Find a Linearly Independent Subset: The columns are not all linearly independent, so the rank is less than 3.
    • Columns \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}\) and \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}\) are not scalar multiples of each other, so they are linearly independent.
  4. Conclusion: The largest set of linearly independent columns has two vectors. Therefore, the rank is 2.
Answer: The rank of matrix A is 2.
4.7. Rank and Singularity (Lecture 5, Example 2)

Determine the rank of the matrices \(A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}\) and \(B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\) and state whether they are singular or non-singular.

Solution:
  1. Analyze Matrix A:
    • Linear Dependence: The second column, \(\begin{bmatrix} 2 \\ 4 \end{bmatrix}\), is exactly 2 times the first column, \(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\). Since the columns are linearly dependent, the maximum number of linearly independent columns is 1.
    • Rank: The rank of A is 1.
    • Singularity: A square matrix is singular if its determinant is 0. \(\det(A) = (1)(4) - (2)(2) = 0\). Since the determinant is 0, the matrix is singular. (Note: A square matrix is singular if and only if its rank is less than its dimension).
  2. Analyze Matrix B:
    • Linear Dependence: The second column, \(\begin{bmatrix} 2 \\ 4 \end{bmatrix}\), is not a scalar multiple of the first column, \(\begin{bmatrix} 1 \\ 3 \end{bmatrix}\). The columns are linearly independent.
    • Rank: The rank of B is 2.
    • Singularity: \(\det(B) = (1)(4) - (2)(3) = 4 - 6 = -2\). Since the determinant is not 0, the matrix is non-singular.

Answer:

  • Matrix A has rank 1 and is singular.
  • Matrix B has rank 2 and is non-singular.
4.8. Define the Column Space of a Matrix (Lecture 5, Example 3)

For the matrix \(A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\), describe its column space.

Solution:
  1. Identify the Column Vectors: The columns of the matrix A are the vectors that span the column space.
    • Column 1: \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 3 \end{bmatrix}\)
    • Column 2: \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}\)
  2. Define the Column Space: The column space is the set of all possible linear combinations of the column vectors. This is the subspace spanned by the columns.
  3. Write the General Form: Any vector \(\mathbf{v}\) in the column space of A can be written in the form \(\mathbf{v} = k_1\mathbf{c}_1 + k_2\mathbf{c}_2\) for some scalars \(k_1\) and \(k_2\).
    • Column space of A = \(\left\{ k_1\begin{bmatrix} 1 \\ 3 \end{bmatrix} + k_2\begin{bmatrix} 2 \\ 4 \end{bmatrix} \ \middle|\ k_1, k_2 \in \mathbb{R} \right\}\).
  4. Geometric Interpretation: Since the two column vectors are linearly independent (as shown in the previous problem), they form a basis for \(\mathbb{R}^2\). Therefore, the column space is the entire \(\mathbb{R}^2\) plane.
Answer: The column space consists of all vectors of the form \(k_1\begin{bmatrix} 1 \\ 3 \end{bmatrix} + k_2\begin{bmatrix} 2 \\ 4 \end{bmatrix}\). Geometrically, this is the entire \(\mathbb{R}^2\) plane.
4.9. Solve a System using Gaussian Elimination (Lecture 5, Example 4)

Solve the system using Gaussian elimination: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]

Solution:
  1. Write the Augmented Matrix:
    • \(\begin{bmatrix} 3 & 2 & | & 7 \\ 2 & -4 & | & -2 \end{bmatrix}\)
  2. Perform Row Operations: The goal is to create a zero in the bottom-left position to get an upper triangular matrix (row echelon form).
    • Perform the operation \(R_2 \to R_2 - \frac{2}{3}R_1\).
    • New \(R_2\): \([2 - \frac{2}{3}(3), -4 - \frac{2}{3}(2) \ | \ -2 - \frac{2}{3}(7)] = [0, -4 - \frac{4}{3} \ | \ -2 - \frac{14}{3}] = [0, -\frac{16}{3} \ | \ -\frac{20}{3}]\)
    • The matrix becomes: \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & -16/3 & | & -20/3 \end{bmatrix}\)
  3. Use Back Substitution:
    • From the second row: \(-\frac{16}{3}x_2 = -\frac{20}{3} \implies 16x_2 = 20 \implies x_2 = \frac{20}{16} = \frac{5}{4}\).
    • From the first row: \(3x_1 + 2x_2 = 7 \implies 3x_1 + 2(\frac{5}{4}) = 7 \implies 3x_1 + \frac{5}{2} = 7 \implies 3x_1 = \frac{14}{2} - \frac{5}{2} = \frac{9}{2} \implies x_1 = \frac{3}{2}\).
Answer: The solution is \(x_1 = 3/2\) and \(x_2 = 5/4\).
4.10. Solve a System using Gauss-Jordan Elimination (Lecture 5, Example 5)

Solve the system using Gauss-Jordan elimination: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]

Solution:
  1. Write the Augmented Matrix:
    • \(\begin{bmatrix} 3 & 2 & | & 7 \\ 2 & -4 & | & -2 \end{bmatrix}\)
  2. Create Row Echelon Form: First, perform Gaussian elimination as in the previous example.
    • \(R_2 \to R_2 - \frac{2}{3}R_1\) gives \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & -16/3 & | & -20/3 \end{bmatrix}\).
  3. Normalize the Pivots: Make each pivot (the first non-zero entry in each row) equal to 1.
    • \(R_2 \to -\frac{3}{16}R_2\) gives \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
    • \(R_1 \to \frac{1}{3}R_1\) gives \(\begin{bmatrix} 1 & 2/3 & | & 7/3 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
  4. Create Zeros Above the Pivots: The goal is to reach reduced row echelon form (an identity matrix on the left).
    • Perform the operation \(R_1 \to R_1 - \frac{2}{3}R_2\).
    • New \(R_1\): \([1-0, \frac{2}{3}-\frac{2}{3}(1) \ | \ \frac{7}{3} - \frac{2}{3}(\frac{5}{4})] = [1, 0 \ | \ \frac{7}{3} - \frac{10}{12}] = [1, 0 \ | \ \frac{28}{12} - \frac{10}{12}] = [1, 0 \ | \ \frac{18}{12}] = [1, 0 \ | \ \frac{3}{2}]\).
    • The matrix becomes: \(\begin{bmatrix} 1 & 0 & | & 3/2 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
  5. Read the Solution: The matrix is now in the form \([I|\mathbf{x}]\), so the solution is directly visible.
    • \(x_1 = 3/2\)
    • \(x_2 = 5/4\)
Answer: The solution is \(x_1 = 3/2\) and \(x_2 = 5/4\).
4.11. Show a System has No Solution (Lecture 5, Example 6)

Consider the \(3 \times 3\) linear system: \[ \begin{cases} x + y + z = 1 \\ 2x + 2y + 2z = 3 \\ 3x + y - z = 2 \end{cases} \] Show this system has no solution using Gaussian elimination.

Solution:
  1. Write the Augmented Matrix:
    • \(\begin{bmatrix} 1 & 1 & 1 & | & 1 \\ 2 & 2 & 2 & | & 3 \\ 3 & 1 & -1 & | & 2 \end{bmatrix}\)
  2. Perform Row Operations:
    • Perform \(R_2 \to R_2 - 2R_1\).
    • New \(R_2\): \([2-2(1), 2-2(1), 2-2(1) \ | \ 3-2(1)] = [0, 0, 0 \ | \ 1]\).
    • The matrix becomes: \(\begin{bmatrix} 1 & 1 & 1 & | & 1 \\ 0 & 0 & 0 & | & 1 \\ 3 & 1 & -1 & | & 2 \end{bmatrix}\).
  3. Analyze the Result:
    • The second row of the matrix corresponds to the equation \(0x + 0y + 0z = 1\).
    • This simplifies to the equation \(0 = 1\), which is a contradiction.
  4. Conclusion:
    • Since the row reduction process leads to a contradictory statement, the original system of equations is inconsistent.
Answer: The system has no solution because Gaussian elimination leads to the contradictory equation \(0 = 1\).
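The same conclusion follows from a rank test: a system is consistent if and only if \(\text{rank}(A) = \text{rank}([A|\mathbf{b}])\). Here the rank jumps when \(\mathbf{b}\) is appended:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0],
              [3.0, 1.0, -1.0]])
b = np.array([1.0, 3.0, 2.0])
Ab = np.column_stack([A, b])

assert np.linalg.matrix_rank(A) == 2    # row 2 is twice row 1
assert np.linalg.matrix_rank(Ab) == 3   # rank increases: inconsistent system
```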
4.12. Solve a System using Cramer’s Rule (Lecture 5, Example 7)

Solve the system using Cramer’s Rule: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]

Solution:
  1. Define the Coefficient Matrix and Vector:
    • Coefficient matrix \(A = \begin{bmatrix} 3 & 2 \\ 2 & -4 \end{bmatrix}\).
    • Constant vector \(\mathbf{b} = \begin{bmatrix} 7 \\ -2 \end{bmatrix}\).
  2. Calculate the Determinant of A:
    • \(D = \det(A) = (3)(-4) - (2)(2) = -12 - 4 = -16\).
    • Since \(D \neq 0\), a unique solution exists.
  3. Calculate the Determinant for \(x_1\): Replace the first column of A with the vector \(\mathbf{b}\).
    • \(A_1 = \begin{bmatrix} 7 & 2 \\ -2 & -4 \end{bmatrix}\).
    • \(D_1 = \det(A_1) = (7)(-4) - (2)(-2) = -28 + 4 = -24\).
  4. Calculate the Determinant for \(x_2\): Replace the second column of A with the vector \(\mathbf{b}\).
    • \(A_2 = \begin{bmatrix} 3 & 7 \\ 2 & -2 \end{bmatrix}\).
    • \(D_2 = \det(A_2) = (3)(-2) - (7)(2) = -6 - 14 = -20\).
  5. Find the Solutions:
    • \(x_1 = D_1 / D = -24 / -16 = 3/2\).
    • \(x_2 = D_2 / D = -20 / -16 = 5/4\).
Answer: The solution is \(x_1 = 3/2\) and \(x_2 = 5/4\).
4.13. Column Space Analysis (Tutorial 5, Task 1)

Find the column space of \(A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\). Describe it geometrically and find its dimension.

Solution:
  1. Identify the Column Vectors: The column space is the span of the column vectors of A.
    • \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}\), \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}\), \(\mathbf{c}_3 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix}\)
  2. Determine Linear Independence: We check if the vectors are linearly independent. Notice that \(\mathbf{c}_1 + \mathbf{c}_3 = 2\mathbf{c}_2\). This can be rewritten as \(\mathbf{c}_1 - 2\mathbf{c}_2 + \mathbf{c}_3 = \mathbf{0}\), which shows the columns are linearly dependent.
  3. Find a Basis for the Column Space: Since the three vectors are dependent, the dimension of the column space (the rank) is less than 3. Vectors \(\mathbf{c}_1\) and \(\mathbf{c}_2\) are not scalar multiples of each other, so they are linearly independent. They can form a basis for the column space.
  4. Describe the Column Space: The column space is the set of all linear combinations of the basis vectors.
    • \(C(A) = \text{span}\{\mathbf{c}_1, \mathbf{c}_2\} = \left\{ k_1\begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} + k_2\begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} \ \middle|\ k_1, k_2 \in \mathbb{R} \right\}\).
  5. Geometric Description and Dimension: The dimension of the column space is the number of vectors in its basis, which is 2. A two-dimensional subspace of \(\mathbb{R}^3\) is a plane passing through the origin.
Answer: The column space is the plane in \(\mathbb{R}^3\) spanned by the vectors \(\begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}\) and \(\begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}\). Its dimension is 2.
4.14. Column Space vs Row Space (Tutorial 5, Task 2)

For \(B = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 3 & 6 & 3 \end{bmatrix}\): Find the column space and its dimension, find the row space and its dimension, and verify they have the same dimension.

Solution:
  1. Find the Column Space:
    • The column vectors are \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\), \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}\), \(\mathbf{c}_3 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\).
    • We can see that \(\mathbf{c}_2 = 2\mathbf{c}_1\) and \(\mathbf{c}_3 = \mathbf{c}_1\). All columns are multiples of the first column.
    • The column space is the span of the first column: \(C(B) = \text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\right\}\). This is a line in \(\mathbb{R}^3\).
    • The dimension of the column space is 1.
  2. Find the Row Space:
    • The row vectors are \(\mathbf{r}_1 = \begin{bmatrix} 1 & 2 & 1 \end{bmatrix}\), \(\mathbf{r}_2 = \begin{bmatrix} 2 & 4 & 2 \end{bmatrix}\), \(\mathbf{r}_3 = \begin{bmatrix} 3 & 6 & 3 \end{bmatrix}\).
    • We can see that \(\mathbf{r}_2 = 2\mathbf{r}_1\) and \(\mathbf{r}_3 = 3\mathbf{r}_1\). All rows are multiples of the first row.
    • The row space is the span of the first row: \(R(B) = \text{span}\{\begin{bmatrix} 1 & 2 & 1 \end{bmatrix}\}\). This is a line in \(\mathbb{R}^3\).
    • The dimension of the row space is 1.
  3. Verify Dimensions: The dimension of the column space is 1, and the dimension of the row space is 1. They are equal, as expected by the Rank Theorem.
Answer: The column space is \(\text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\right\}\) with dimension 1. The row space is \(\text{span}\{\begin{bmatrix} 1 & 2 & 1 \end{bmatrix}\}\) with dimension 1. The dimensions are the same.
4.15. Verifying Orthogonal Matrices (Tutorial 5, Task 3)

Which of these matrices are orthogonal? Verify your answers. \(Q_1 = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\), \(Q_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\), \(Q_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}\)

Click to see the solution
  1. Condition for Orthogonality: A matrix \(Q\) is orthogonal if its columns form an orthonormal set (they are mutually orthogonal unit vectors). This is equivalent to the condition \(Q^T Q = I\).
  2. Check \(Q_1\):
    • The columns are \(\mathbf{q}_1 = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}\) and \(\mathbf{q}_2 = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}\).
    • Check norms (lengths): \(||\mathbf{q}_1||^2 = \cos^2\theta + \sin^2\theta = 1\) and \(||\mathbf{q}_2||^2 = (-\sin\theta)^2 + \cos^2\theta = 1\). The columns are unit vectors.
    • Check orthogonality (dot product): \(\mathbf{q}_1 \cdot \mathbf{q}_2 = (\cos\theta)(-\sin\theta) + (\sin\theta)(\cos\theta) = 0\). The columns are orthogonal.
    • Conclusion: \(Q_1\) is an orthogonal matrix.
  3. Check \(Q_2\):
    • The columns are \(\mathbf{q}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\) and \(\mathbf{q}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\).
    • Check norms: \(||\mathbf{q}_1||^2 = 1^2 + 1^2 = 2 \neq 1\). The columns are not unit vectors.
    • Conclusion: \(Q_2\) is not orthogonal.
  4. Check \(Q_3\):
    • The columns are \(\mathbf{q}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\), \(\mathbf{q}_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\), \(\mathbf{q}_3 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\).
    • These are the standard basis vectors, which are known to be unit vectors and mutually orthogonal.
    • Conclusion: \(Q_3\) is an orthogonal matrix (it is a permutation matrix).
Answer: \(Q_1\) and \(Q_3\) are orthogonal; \(Q_2\) is not.
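The check \(Q^T Q = I\) is easy to automate. A minimal sketch (the helper `is_orthogonal` and the angle value are illustrative choices, not part of the task):

```python
import numpy as np

theta = 0.7  # arbitrary angle for Q1
Q1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
Q2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])
Q3 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])

def is_orthogonal(Q):
    """Q is orthogonal iff Q^T Q equals the identity matrix."""
    return np.allclose(Q.T @ Q, np.eye(Q.shape[0]))
```

For \(Q_2\), the product \(Q_2^T Q_2 = 2I \neq I\), matching the hand computation that its columns have length \(\sqrt{2}\).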
4.16. Properties of Orthogonal Matrices (Tutorial 5, Task 4)

Let \(Q\) be an \(n \times n\) orthogonal matrix. Prove that:

  1. \(||Q\mathbf{x}|| = ||\mathbf{x}||\) for any \(\mathbf{x} \in \mathbb{R}^n\)
  2. \(|\det(Q)| = 1\)
Click to see the solution

Proof of (1): Length Preservation

  1. Start with the squared norm: \(||Q\mathbf{x}||^2 = (Q\mathbf{x}) \cdot (Q\mathbf{x})\).
  2. Using the identity \(\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T\mathbf{v}\): \((Q\mathbf{x}) \cdot (Q\mathbf{x}) = (Q\mathbf{x})^T(Q\mathbf{x})\).
  3. Apply the transpose rule \((AB)^T = B^T A^T\): \((Q\mathbf{x})^T(Q\mathbf{x}) = (\mathbf{x}^T Q^T)(Q\mathbf{x}) = \mathbf{x}^T(Q^T Q)\mathbf{x}\).
  4. By definition of an orthogonal matrix, \(Q^T Q = I\).
  5. Substitute: \(\mathbf{x}^T(I)\mathbf{x} = \mathbf{x}^T\mathbf{x} = ||\mathbf{x}||^2\).
  6. We have shown \(||Q\mathbf{x}||^2 = ||\mathbf{x}||^2\). Since norms are non-negative, taking the square root gives \(||Q\mathbf{x}|| = ||\mathbf{x}||\).

Proof of (2): Determinant

  1. Start with the definition of an orthogonal matrix: \(Q^T Q = I\).
  2. Take the determinant of both sides: \(\det(Q^T Q) = \det(I)\).
  3. Use determinant properties \(\det(AB) = \det(A)\det(B)\) and \(\det(A^T) = \det(A)\): \(\det(Q^T)\det(Q) = 1\).
  4. This becomes \(\det(Q)\det(Q) = 1\), or \((\det(Q))^2 = 1\).
  5. Taking the square root of both sides gives \(\det(Q) = \pm 1\).
  6. Therefore, the absolute value is \(|\det(Q)| = 1\).
Answer: Both properties are proven using the definition \(Q^T Q = I\) and standard matrix properties.
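Both proved properties can be illustrated numerically on a concrete orthogonal matrix. A sketch using a rotation matrix (the angle and test vector are arbitrary choices):

```python
import numpy as np

theta = 1.2  # any rotation matrix is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([3.0, -4.0])  # ||x|| = 5

# Property (1): ||Qx|| = ||x||
length_preserved = np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))

# Property (2): |det(Q)| = 1
det_abs = abs(np.linalg.det(Q))
```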
4.17. Testing Invertibility (Tutorial 5, Task 5)

Determine which matrices are invertible and find inverses when possible: \(A = \begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix}\), \(B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\), \(C = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 2 & 0 & 4 \end{bmatrix}\)

Click to see the solution
  1. Analyze Matrix A:
    • A matrix is invertible if and only if its determinant is non-zero.
    • \(\det(A) = (2)(3) - (1)(5) = 6 - 5 = 1\).
    • Since \(\det(A) \neq 0\), A is invertible.
    • The inverse is \(A^{-1} = \frac{1}{1}\begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}\).
  2. Analyze Matrix B:
    • \(\det(B) = 1(5\cdot9 - 6\cdot8) - 2(4\cdot9 - 6\cdot7) + 3(4\cdot8 - 5\cdot7)\)
    • \(= 1(45 - 48) - 2(36 - 42) + 3(32 - 35) = 1(-3) - 2(-6) + 3(-3) = -3 + 12 - 9 = 0\).
    • Since \(\det(B) = 0\), B is not invertible.
  3. Analyze Matrix C:
    • Notice that the third row is 2 times the first row. This means the rows are linearly dependent, so the determinant must be 0.
    • \(\det(C) = 1(1\cdot4 - (-1)\cdot0) - 0(...) + 2(0\cdot0 - 1\cdot2) = 1(4) - 0 + 2(-2) = 4 - 4 = 0\).
    • Since \(\det(C) = 0\), C is not invertible.
Answer: Only matrix A is invertible, with \(A^{-1} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}\). Matrices B and C are not invertible.
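These determinant and inverse computations can be reproduced with NumPy (a sketch; `np.linalg.inv` would raise `LinAlgError` if applied to the singular matrices B or C):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
C = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0],
              [2.0, 0.0,  4.0]])

det_A = np.linalg.det(A)   # nonzero, so A is invertible
A_inv = np.linalg.inv(A)   # should match [[3, -1], [-5, 2]]
det_B = np.linalg.det(B)   # zero (up to rounding): B is singular
det_C = np.linalg.det(C)   # zero (up to rounding): C is singular
```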
4.18. Invertibility Conditions (Tutorial 5, Task 6)

Prove that for a square matrix \(A\), the following are equivalent:

  1. \(A\) is invertible
  2. \(A\mathbf{x} = \mathbf{0}\) has only the trivial solution
  3. The columns of \(A\) are linearly independent
  4. \(\det(A) \neq 0\)
Click to see the solution

This is a statement of the Invertible Matrix Theorem. We can prove the equivalence by showing a cycle of implications, for example, \((1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (4) \Rightarrow (1)\).

  1. \((1) \Rightarrow (2)\): Assume A is invertible. Let \(A\mathbf{x} = \mathbf{0}\). We can multiply both sides by \(A^{-1}\): \(A^{-1}(A\mathbf{x}) = A^{-1}\mathbf{0}\). This simplifies to \((A^{-1}A)\mathbf{x} = \mathbf{0}\), then \(I\mathbf{x} = \mathbf{0}\), which means \(\mathbf{x} = \mathbf{0}\). Thus, the only solution is the trivial one.
  2. \((2) \Rightarrow (3)\): Assume \(A\mathbf{x} = \mathbf{0}\) has only the trivial solution. The definition of linear independence for the columns of A (\(\mathbf{a}_1, ..., \mathbf{a}_n\)) is that the only solution to the equation \(c_1\mathbf{a}_1 + ... + c_n\mathbf{a}_n = \mathbf{0}\) is \(c_1=...=c_n=0\). This vector equation is identical to \(A\mathbf{c} = \mathbf{0}\), where \(\mathbf{c}\) is the vector of coefficients. Our assumption means that \(\mathbf{c}\) must be the zero vector, which is precisely the definition of linear independence for the columns of A.
  3. \((3) \Rightarrow (4)\): Assume the columns of A are linearly independent. This means the rank of the \(n \times n\) matrix is \(n\). A square matrix has full rank if and only if its determinant is non-zero. (This follows from the fact that a matrix with full rank can be row-reduced to the identity matrix, which has a determinant of 1, and row operations only change the determinant by non-zero scalar factors).
  4. \((4) \Rightarrow (1)\): Assume \(\det(A) \neq 0\). The formula for the inverse of a matrix is \(A^{-1} = \frac{1}{\det(A)}\text{adj}(A)\). Since the determinant is non-zero, this formula is well-defined and an inverse matrix exists.
Answer: By showing that each statement implies the next in a cycle, we prove they are all logically equivalent.
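The equivalence can also be illustrated numerically on a concrete invertible matrix: each statement of the theorem reduces to a checkable condition. A sketch (the rank-based null-space check is an illustrative reformulation, since rank \(n\) is equivalent to a trivial null space):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])  # an invertible example
n = A.shape[0]

# (4) det(A) != 0
has_nonzero_det = not np.isclose(np.linalg.det(A), 0.0)

# (2) Ax = 0 has only the trivial solution  <=>  null space is trivial
#     <=>  rank(A) = n
trivial_nullspace = np.linalg.matrix_rank(A) == n

# (3) columns linearly independent  <=>  rank(A) = number of columns
independent_columns = np.linalg.matrix_rank(A) == A.shape[1]
```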
4.19. Compute Rank (Tutorial 5, Task 7)

Find the rank of these matrices: \(A = \begin{bmatrix} 1 & 2 & 0 \\ 3 & 6 & 1 \\ 2 & 4 & -1 \end{bmatrix}\), \(B = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 5 & 7 \end{bmatrix}\), \(C = \begin{bmatrix} 2 & 4 & 6 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix}\)

Click to see the solution

We use Gaussian elimination to find the row echelon form. The rank is the number of non-zero rows (pivots).

  1. Matrix A:
    • \(\begin{bmatrix} 1 & 2 & 0 \\ 3 & 6 & 1 \\ 2 & 4 & -1 \end{bmatrix} \xrightarrow[R_3 \to R_3 - 2R_1]{R_2 \to R_2 - 3R_1} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{bmatrix} \xrightarrow{R_3 \to R_3 + R_2} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\)
    • There are 2 non-zero rows. Rank(A) = 2.
  2. Matrix B:
    • \(\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 5 & 7 \end{bmatrix} \xrightarrow[R_3 \to R_3 - R_1]{R_2 \to R_2 - R_1} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 2 & 4 & 6 \end{bmatrix} \xrightarrow{R_3 \to R_3 - 2R_2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}\)
    • There are 2 non-zero rows. Rank(B) = 2.
  3. Matrix C:
    • \(\begin{bmatrix} 2 & 4 & 6 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & 1 & 1 \\ 2 & 4 & 6 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_3 \to R_3 - \frac{1}{2}R_2} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 0 & 0 \end{bmatrix}\)
    • There are 2 non-zero rows. Rank(C) = 2.
Answer: The ranks are Rank(A) = 2, Rank(B) = 2, and Rank(C) = 2.
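The three Gaussian-elimination results can be cross-checked in one line each with `numpy.linalg.matrix_rank` (a sketch, not a replacement for the row reduction shown above):

```python
import numpy as np

A = np.array([[1, 2,  0],
              [3, 6,  1],
              [2, 4, -1]])
B = np.array([[1, 1, 1, 1],
              [1, 2, 3, 4],
              [1, 3, 5, 7]])
C = np.array([[2, 4, 6],
              [1, 1, 1],
              [0, 1, 2]])

ranks = [np.linalg.matrix_rank(M) for M in (A, B, C)]  # expected: [2, 2, 2]
```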
4.20. Rank Inequalities (Tutorial 5, Task 8)

Let \(A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\), \(B = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\). Verify the rank inequalities:

  1. \(\text{rank}(A+B) \le \text{rank}(A) + \text{rank}(B)\)
  2. \(\text{rank}(AB) \le \min(\text{rank}(A), \text{rank}(B))\)
  3. \(\text{rank}(A) + \text{rank}(B) - n \le \text{rank}(AB)\) (Sylvester)
Click to see the solution

First, we find the ranks of the given matrices:

  • \(\text{Rank}(A) = 1\) (one non-zero row/column)
  • \(\text{Rank}(B) = 1\) (one non-zero row/column)
  • The matrix size is \(n=2\).
  1. Verify inequality (1):
    • \(A + B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\).
    • \(\text{rank}(A+B) = 2\).
    • The inequality is \(2 \le 1 + 1\), which is \(2 \le 2\). This is true.
  2. Verify inequality (2):
    • \(AB = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\).
    • \(\text{rank}(AB) = 0\).
    • \(\min(\text{rank}(A), \text{rank}(B)) = \min(1, 1) = 1\).
    • The inequality is \(0 \le 1\). This is true.
  3. Verify inequality (3) (Sylvester’s Inequality):
    • \(\text{rank}(A) + \text{rank}(B) - n = 1 + 1 - 2 = 0\).
    • \(\text{rank}(AB) = 0\).
    • The inequality is \(0 \le 0\). This is true.
Answer: All three rank inequalities hold for the given matrices.
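The three inequalities can be verified directly in code for these matrices (a sketch mirroring the hand computation above):

```python
import numpy as np

A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])
n = 2

rA = np.linalg.matrix_rank(A)       # 1
rB = np.linalg.matrix_rank(B)       # 1
r_sum = np.linalg.matrix_rank(A + B)   # rank of the identity: 2
r_prod = np.linalg.matrix_rank(A @ B)  # rank of the zero matrix: 0
```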
4.21. Rank and Determinant (Tutorial 5, Task 10)

For each matrix, compute the rank and determinant, and determine invertibility: \(D = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}\), \(E = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{bmatrix}\), \(F = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}\)

Click to see the solution
  1. Matrix D:
    • Determinant: \(\det(D) = 2(2\cdot2 - 1\cdot1) - 1(1\cdot2 - 1\cdot1) + 1(1\cdot1 - 2\cdot1) = 2(3) - 1(1) + 1(-1) = 6 - 1 - 1 = 4\).
    • Rank: Since the determinant of the \(3 \times 3\) matrix is non-zero, it has full rank. Rank(D) = 3.
    • Invertibility: Since \(\det(D) \neq 0\), the matrix is invertible.
  2. Matrix E:
    • Determinant: The second row is twice the first row, so the rows are linearly dependent. Therefore, \(\det(E) = 0\).
    • Rank: Since the determinant is 0, the rank must be less than 3. The first and third rows, \(\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}\) and \(\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}\), are not multiples of each other, so they are linearly independent. Thus, the rank is at least 2. Rank(E) = 2.
    • Invertibility: Since \(\det(E) = 0\), the matrix is not invertible.
  3. Matrix F:
    • Determinant: F is a diagonal matrix, so its determinant is the product of the diagonal elements: \(\det(F) = 1 \cdot 2 \cdot 3 = 6\).
    • Rank: Since the determinant is non-zero, the matrix has full rank. Rank(F) = 3.
    • Invertibility: Since \(\det(F) \neq 0\), the matrix is invertible.

Answer:

  • D: Rank = 3, Determinant = 4, Invertible.
  • E: Rank = 2, Determinant = 0, Not Invertible.
  • F: Rank = 3, Determinant = 6, Invertible.
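The rank/determinant/invertibility summary above can be reproduced numerically (a sketch; determinants of the floating-point computations match the exact values up to rounding):

```python
import numpy as np

D = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
E = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
F = np.diag([1.0, 2.0, 3.0])

# (rank, determinant) for each matrix
results = {name: (np.linalg.matrix_rank(M), np.linalg.det(M))
           for name, M in [("D", D), ("E", E), ("F", F)]}
```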